\parskip .1in plus 5pt \baselineskip 15pt \lineskip 1pt
\newbox\bigstrutbox
\setbox\bigstrutbox=\hbox{\vrule height10pt depth5.0pt width0pt}
\def\bigstrut{\relax\ifmmode\copy\bigstrutbox\else\unhcopy\bigstrutbox\fi}

These charts are globally normalized. For each benchmark, the fastest
time over all implementations is found. For each implementation reported,
its actual time is divided by that best time, and the quotient appears in the
chart. Therefore, the fastest implementation will have $1.0$ reported for its
time, and an implementation twice as slow as that one will show
$2.0$ in the chart at that point.
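In symbols (the notation $t_i$ is introduced here for exposition and does not
appear in the charts): if $t_i$ is the time reported by implementation $i$ on a
given benchmark, the chart entry for that implementation is
$$\hbox{entry}_i \;=\; {t_i \over \min_j t_j},$$
so the fastest implementation on each benchmark always receives an entry of $1.0$.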

The charts are divided into two groups: {\bf CPU Time} and {\bf Real Time}.
{\bf CPU Time} is the best approximation to CPU time that was reported for that
implementation.
For the S-1, the LM-2, and the PERQ, the CPU time is approximated by real time;
for InterLisp Vax~780, the CPU time is approximated by the reported CPU time $+$ garbage
collection time; all others report CPU time directly.

{\bf Real Time} is the best approximation to real time available. For
PDP-10 MacLisp, the Symbolics 3600, the LM-2, the PERQ, and the S-1, this
is reported directly.  For the Dolphin, the Dandelion, the Dorado, PSL on
all machines, Franz on all machines, Common Lisp on the DEC machines, and
InterLisp Vax~780, this is CPU time $+$ garbage collection time.

In these charts, Franz times are reported for the Translink~$=$~T, LocalF~$=$~No
case.

\vfill\eject

\input dchart.tex

\bye